# Fudan University | 07/05/2025
Fudan University Unveils Lorsa: Decoding Transformer Attention Superposition with Sparse Mechanisms
Fudan University researchers have developed Lorsa, a sparse attention mechanism that disentangles the atomic attention units hidden in transformer superposition, improving the interpretability of large language models.
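To make the idea of "disentangling atomic attention units" concrete, here is a minimal, hypothetical sketch of a Lorsa-style decomposition, not the authors' implementation: a large bank of narrow heads, each with a rank-1 value/output circuit, kept sparse per position with a TopK step and intended to be trained to reconstruct a standard attention layer's output. All module names, shapes, and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of a sparse attention decomposition in the spirit of Lorsa.
# Not the authors' code; names, ranks, and defaults are assumptions for illustration.
import torch
import torch.nn as nn


class SparseAttentionBank(nn.Module):
    def __init__(self, d_model=512, n_heads=256, k_active=16):
        super().__init__()
        self.k_active = k_active
        # Per-head rank-1 query/key maps (each head scores tokens with a single direction).
        self.W_q = nn.Parameter(torch.randn(n_heads, d_model) * d_model**-0.5)
        self.W_k = nn.Parameter(torch.randn(n_heads, d_model) * d_model**-0.5)
        # Rank-1 OV circuit per head: a read direction (W_v) and a write direction (W_o).
        self.W_v = nn.Parameter(torch.randn(n_heads, d_model) * d_model**-0.5)
        self.W_o = nn.Parameter(torch.randn(n_heads, d_model) * d_model**-0.5)

    def forward(self, x):                                   # x: (batch, seq, d_model)
        q = torch.einsum("bsd,hd->bhs", x, self.W_q)        # scalar query per head/token
        k = torch.einsum("bsd,hd->bhs", x, self.W_k)        # scalar key per head/token
        v = torch.einsum("bsd,hd->bhs", x, self.W_v)        # scalar value per head/token
        scores = q.unsqueeze(-1) * k.unsqueeze(-2)          # (batch, head, seq, seq)
        causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        attn = scores.masked_fill(causal, float("-inf")).softmax(dim=-1)
        z = torch.einsum("bhst,bht->bhs", attn, v)          # per-head activation
        # Sparsity step: keep only the k most active heads at each position,
        # so each surviving head can be read as one "atomic" attention unit.
        idx = z.abs().topk(self.k_active, dim=1).indices
        z_sparse = torch.zeros_like(z).scatter(1, idx, z.gather(1, idx))
        return torch.einsum("bhs,hd->bsd", z_sparse, self.W_o)


# Training such a bank would typically minimize a reconstruction loss of the form
# || SparseAttentionBank(x) - original_attention_layer(x) ||^2 over model activations.
```

The design intuition, under these assumptions, is that superposition packs many behaviors into a few dense heads; replacing them with a much larger bank of rank-1 heads under a sparsity constraint gives each recovered unit a single, inspectable read/write direction.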